On Monday, the CEOs of Nvidia and Meta had a conversation about the latest advancements in generative AI. They also discussed what's possible with the new technology at SIGGRAPH 2024, the annual industry conference for computer graphics enthusiasts and professionals.
"It's exciting. There is a lot of new stuff to build," Zuckerberg said. "Progress on [AI] fundamental research is accelerating. It's a pretty wild time."
Zuckerberg said Meta has five years of product innovation ahead based on current AI model technology alone. He believes every business will someday have an AI agent talking and interacting with customers.
The Meta executive also sees a world where the next 4.0 version of the company's Llama AI model will be able to act as an agent: you give it a command, and it comes back with a robust answer weeks or months later after doing its research and computation. The company released Llama version 3.1 last week.
Huang praised Meta's work in creating the Llama open-source AI model family. He said Llama is helping more developers and companies get access to AI model technology.
Meta is one of Nvidia's top customers. In January, Zuckerberg pledged that his company would have 350,000 Nvidia H100 graphics processing units and a total of nearly 600,000 H100 compute equivalent GPUs by the end of this year. "Our long term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit," Zuckerberg wrote in a post at the time. "We're building massive compute infrastructure to support our future road map" for artificial intelligence.
Nvidia currently dominates the market for chips used in AI applications. Both start-ups and larger corporations prefer the companyâs products due to its robust programming platform, called CUDA, which offers AI-related tools that accelerate the development of AI projects.
Write to Tae Kim at tae.kim@barrons.com